67 research outputs found

    Extracting Build Changes with BUILDDIFF

    Build systems are an essential part of modern software engineering projects. As software projects change continuously, it is crucial to understand how the build system changes, because neglecting its maintenance can lead to expensive build breakage. Recent studies have investigated the (co-)evolution of build configurations and the reasons for build breakage, but only on a coarse-grained level. In this paper, we present BUILDDIFF, an approach to extract detailed build changes from Maven build files and classify them into 95 change types. In a manual evaluation of 400 build-changing commits, we show that BUILDDIFF can extract and classify build changes with an average precision and recall of 0.96 and 0.98, respectively. We then present two studies that use the build changes extracted from 30 open-source Java projects to study the frequency and time of build changes. The results show that the top 10 most frequent change types account for 73% of the build changes. Among them, changes to version numbers and changes to the dependencies of the projects occur most frequently. Furthermore, our results show that build changes occur frequently around releases. With these results, we provide the basis for further research, such as analyzing the (co-)evolution of build files with other artifacts or improving effort estimation approaches. Furthermore, our detailed change information enables improvements of refactoring approaches for build configurations and of models that identify error-prone build files.
    Comment: Accepted at the International Conference on Mining Software Repositories (MSR), 201
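
    To make the idea of fine-grained build change extraction concrete, the following is a minimal sketch (not the BUILDDIFF tool itself) that diffs the dependency declarations of two pom.xml revisions and reports a few of the change types the paper describes, such as added or removed dependencies and version updates; the class name and report labels are invented for illustration.

```java
// Illustrative sketch only, not BUILDDIFF: diff the <dependency> entries of two
// pom.xml revisions and report a few coarse change types (dependency added/removed,
// version update). Usage: java PomDependencyDiff old/pom.xml new/pom.xml
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class PomDependencyDiff {

    // Maps "groupId:artifactId" to the declared version string.
    static Map<String, String> readDependencies(String pomPath) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(Paths.get(pomPath).toFile());
        Map<String, String> deps = new HashMap<>();
        NodeList nodes = doc.getElementsByTagName("dependency");
        for (int i = 0; i < nodes.getLength(); i++) {
            Element dep = (Element) nodes.item(i);
            String key = text(dep, "groupId") + ":" + text(dep, "artifactId");
            deps.put(key, text(dep, "version"));
        }
        return deps;
    }

    static String text(Element parent, String tag) {
        NodeList list = parent.getElementsByTagName(tag);
        return list.getLength() > 0 ? list.item(0).getTextContent().trim() : "";
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> before = readDependencies(args[0]); // old pom.xml revision
        Map<String, String> after = readDependencies(args[1]);  // new pom.xml revision

        for (Map.Entry<String, String> e : after.entrySet()) {
            String old = before.get(e.getKey());
            if (old == null) {
                System.out.println("DEPENDENCY_ADDED   " + e.getKey());
            } else if (!old.equals(e.getValue())) {
                System.out.println("VERSION_UPDATE     " + e.getKey()
                        + " " + old + " -> " + e.getValue());
            }
        }
        for (String key : before.keySet()) {
            if (!after.containsKey(key)) {
                System.out.println("DEPENDENCY_REMOVED " + key);
            }
        }
    }
}
```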

    Incentive-Based Software Security: Fair Micro-Payments for Writing Secure Code

    We describe a mechanism to create fair and explainable incentives that reward software developers for contributions to the security of a product. We use cooperative game theory to model the actions of the developer team inside a risk management workflow, assuming the team actively works against known threats and thereby receives micro-payments based on its performance. The use of the Shapley value provides natural explanations here, directly through (new) interpretations of the axiomatic grounding of the imputation. The resulting mechanism is straightforward to implement and relies on standard tools from collaborative software development, such as those available for git repositories and for mining them. The micro-payment model itself is deterministic and does not rely on uncertain information outside the scope of the developer team or the enterprise; hence it is free of assumptions about adversarial incentives or user behavior, beyond their role in the risk management process that the mechanism is part of. We corroborate our model with a worked example based on real-life data.
    Comment: presented as a poster at GameSec 2023 (www.gamesec-conf.org)
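
    As a small worked illustration of the Shapley-value payouts such a mechanism builds on, the sketch below averages each developer's marginal contribution over all join orders of the team; the characteristic function (number of distinct known threats a coalition covers) and the sample data are invented stand-ins, not the paper's actual risk model.

```java
// Illustrative Shapley-value computation for a tiny developer coalition game.
// The characteristic function and the sample data are made-up placeholders.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ShapleyPayout {

    // v(S): value created by coalition S; here, the number of distinct known threats
    // the coalition's contributions have addressed.
    static double value(Set<String> coalition, Map<String, Set<String>> threatsCovered) {
        Set<String> covered = new HashSet<>();
        for (String dev : coalition) {
            covered.addAll(threatsCovered.getOrDefault(dev, Set.of()));
        }
        return covered.size();
    }

    // Shapley value: each developer's marginal contribution averaged over all join orders.
    static Map<String, Double> shapley(List<String> devs, Map<String, Set<String>> threatsCovered) {
        Map<String, Double> phi = new HashMap<>();
        devs.forEach(d -> phi.put(d, 0.0));
        List<List<String>> orders = permutations(devs);
        for (List<String> order : orders) {
            Set<String> coalition = new HashSet<>();
            double previous = 0.0;
            for (String dev : order) {
                coalition.add(dev);
                double current = value(coalition, threatsCovered);
                phi.merge(dev, current - previous, Double::sum);
                previous = current;
            }
        }
        phi.replaceAll((dev, sum) -> sum / orders.size());
        return phi;
    }

    static List<List<String>> permutations(List<String> items) {
        if (items.isEmpty()) {
            return List.of(new ArrayList<>());
        }
        List<List<String>> result = new ArrayList<>();
        for (String head : items) {
            List<String> rest = new ArrayList<>(items);
            rest.remove(head);
            for (List<String> tail : permutations(rest)) {
                List<String> perm = new ArrayList<>();
                perm.add(head);
                perm.addAll(tail);
                result.add(perm);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Hypothetical data: which known threats each developer's contributions addressed.
        Map<String, Set<String>> threatsCovered = Map.of(
                "alice", Set.of("T1", "T2"),
                "bob",   Set.of("T2"),
                "carol", Set.of("T3"));
        // The resulting shares sum to the total value created and can be scaled to a budget.
        shapley(List.of("alice", "bob", "carol"), threatsCovered)
                .forEach((dev, share) -> System.out.printf("%s: %.2f%n", dev, share));
    }
}
```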

    Studying Late Propagations in Code Clone Evolution Using Software Repository Mining

    In the code clone evolution community, Late Propagation (LP) has been identified as one of the clone evolution patterns that can potentially lead to software defects. An LP occurs when the instances of a clone pair are changed consistently, but not at the same time. The clone instance that receives the update at a later time might exhibit unintended behavior if the modification was a bugfix. In this paper, we present an approach to extract LPs from software repositories. Subsequently, we study LPs in four software systems, which allows us to investigate the propagation time, the clone dispersion, and the effects of LPs on the software.
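
    The following is a minimal sketch of how such an LP could be flagged, assuming the change histories of both clone instances have already been mined from the repository; the record type, change identifiers, and timestamps are invented for illustration.

```java
// Illustrative sketch of flagging a Late Propagation (LP), assuming the change
// histories of both clone instances have already been mined. A pair is flagged
// when both instances receive a consistent change, but at different times.
import java.time.Duration;
import java.time.Instant;
import java.util.List;

public class LatePropagationCheck {

    // A change applied to one clone instance: which edit it is and when it landed.
    record InstanceChange(String changeId, Instant committedAt) {}

    // Returns the propagation delay if the change was propagated late, or null otherwise.
    static Duration latePropagationDelay(List<InstanceChange> instanceA,
                                         List<InstanceChange> instanceB) {
        for (InstanceChange a : instanceA) {
            for (InstanceChange b : instanceB) {
                if (a.changeId().equals(b.changeId())
                        && !a.committedAt().equals(b.committedAt())) {
                    // Consistent change applied at different times: a Late Propagation.
                    Instant first = a.committedAt().isBefore(b.committedAt())
                            ? a.committedAt() : b.committedAt();
                    Instant second = a.committedAt().isBefore(b.committedAt())
                            ? b.committedAt() : a.committedAt();
                    return Duration.between(first, second);
                }
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // Hypothetical mined data: the same bugfix "fix-npe" reached the second
        // clone instance twelve days after the first one.
        List<InstanceChange> a = List.of(
                new InstanceChange("fix-npe", Instant.parse("2024-03-01T10:00:00Z")));
        List<InstanceChange> b = List.of(
                new InstanceChange("fix-npe", Instant.parse("2024-03-13T09:30:00Z")));
        Duration delay = latePropagationDelay(a, b);
        System.out.println(delay == null ? "no late propagation"
                : "late propagation, delay: " + delay.toDays() + " days");
    }
}
```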

    PASDA: A Partition-based Semantic Differencing Approach with Best Effort Classification of Undecided Cases

    Equivalence checking is used to verify whether two programs produce equivalent outputs when given equivalent inputs. Research in this field has mainly focused on improving equivalence checking accuracy and runtime performance. However, for program pairs that cannot be proven to be either equivalent or non-equivalent, existing approaches only report a classification result of "unknown", which provides no information regarding the programs' non-/equivalence. In this paper, we introduce PASDA, our partition-based semantic differencing approach with best effort classification of undecided cases. While PASDA aims to formally prove non-/equivalence of analyzed program pairs using a variant of differential symbolic execution, its main novelty lies in its handling of cases for which no formal non-/equivalence proof can be found. For such cases, PASDA provides a best effort equivalence classification based on a set of classification heuristics. We evaluated PASDA with an existing benchmark consisting of 141 non-/equivalent program pairs. PASDA correctly classified 61-74% of these cases at timeouts from 10 seconds to 3600 seconds, achieving equivalence checking accuracies that are 3-7% higher than the best results of three existing tools. Furthermore, PASDA's best effort classifications were correct for 70-75% of equivalent and 55-85% of non-equivalent cases across the different timeouts.
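
    The sketch below only illustrates the overall decision structure (try to disprove equivalence within a budget, otherwise fall back to a best-effort verdict instead of reporting "unknown"); PASDA itself relies on differential symbolic execution and a richer set of classification heuristics, and all names here are invented.

```java
// Toy sketch of best-effort equivalence classification, not PASDA itself.
import java.util.function.IntUnaryOperator;

public class BestEffortEquivalence {

    enum Verdict { NON_EQUIVALENT, MAYBE_EQUIVALENT }

    // Search a bounded input range for a counterexample. A real checker would first
    // attempt a formal proof of equivalence; this sketch only has the best-effort
    // fallback that replaces a plain "unknown" verdict.
    static Verdict classify(IntUnaryOperator p1, IntUnaryOperator p2, int budget) {
        for (int input = -budget; input <= budget; input++) {
            if (p1.applyAsInt(input) != p2.applyAsInt(input)) {
                return Verdict.NON_EQUIVALENT;   // concrete counterexample found
            }
        }
        // No counterexample within the budget: lean towards "equivalent", clearly
        // marked as a best-effort classification rather than a proof.
        return Verdict.MAYBE_EQUIVALENT;
    }

    public static void main(String[] args) {
        IntUnaryOperator original   = x -> x * 2;
        IntUnaryOperator refactored = x -> x + x;                     // equivalent rewrite
        IntUnaryOperator buggy      = x -> x * 2 + (x == 7 ? 1 : 0);  // differs for x == 7

        System.out.println(classify(original, refactored, 1000));    // MAYBE_EQUIVALENT
        System.out.println(classify(original, buggy, 1000));         // NON_EQUIVALENT
    }
}
```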

    Microservice API Evolution in Practice: A Study on Strategies and Challenges

    Nowadays, many companies design and develop their software systems as a set of loosely coupled microservices that communicate via their Application Programming Interfaces (APIs). While the loose coupling improves maintainability, scalability, and fault tolerance, it poses new challenges to the API evolution process. Related work identified communication and integration as major API evolution challenges but did not provide the underlying reasons or research directions to mitigate them. In this paper, we aim to identify microservice API evolution strategies and challenges in practice and to gain a broader perspective on their relationships. We conducted 17 semi-structured interviews with developers, architects, and managers in 11 companies and analyzed the interviews with open coding as used in grounded theory. In total, we identified six strategies and six challenges for REpresentational State Transfer (REST) and event-driven communication via message brokers. The strategies mainly focus on API backward compatibility, versioning, and close collaboration between teams. The challenges include change impact analysis efforts, ineffective communication of changes, and consumer reliance on outdated versions, leading to API design degradation. We define two important problems in microservice API evolution that result from these challenges and their coping strategies: tight organizational coupling and consumer lock-in. To mitigate these two problems, we propose automating change impact analysis and investigating effective communication of changes as open research directions.
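
    As a minimal illustration of one of the identified strategies, the sketch below keeps an event payload backward compatible while renaming a field, so consumers that still read the old field are not broken; the event, field names, and versions are invented for illustration.

```java
// Tiny sketch of backward-compatible payload evolution between services.
// Field names, versions, and the event are invented for illustration.
import java.util.HashMap;
import java.util.Map;

public class BackwardCompatiblePayload {

    // Producer side: v2 renames "customer" to "customerId" but keeps the old field
    // populated, so v1 consumers are not broken by the change.
    static Map<String, String> buildOrderCreatedEventV2(String customerId, String orderId) {
        Map<String, String> payload = new HashMap<>();
        payload.put("schemaVersion", "2");
        payload.put("orderId", orderId);
        payload.put("customerId", customerId);  // new, preferred field
        payload.put("customer", customerId);    // deprecated alias kept for old consumers
        return payload;
    }

    // Consumer side ("tolerant reader"): prefer the new field, fall back to the old one,
    // and ignore any fields it does not know about.
    static String readCustomer(Map<String, String> payload) {
        return payload.getOrDefault("customerId", payload.get("customer"));
    }

    public static void main(String[] args) {
        Map<String, String> event = buildOrderCreatedEventV2("c-42", "o-1001");
        System.out.println("customer: " + readCustomer(event));
    }
}
```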

    Verifying temporal specifications of Java programs

    Many Java programs encode temporal behaviors in their source code, typically mixing three features provided by the Java language: (1) pausing the execution for a limited amount of time, (2) waiting for an event that has to occur before a deadline expires, and (3) comparing timestamps. In this work, we show how to exploit modern SMT solvers together with static analysis to produce a network of timed automata approximating the temporal behavior of a set of Java threads. We also prove that the presented abstraction preserves the truth of MTL and ATCTL formulae, two well-known logics for expressing timed specifications. As far as we know, this is the first feasible approach that enables the user to automatically model check timed specifications of Java software directly from the source code.
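
    To make the three temporal features concrete, the snippet below shows the kind of Java source such an analysis would abstract into timed automata: a bounded pause, a wait for an event with a deadline, and a timestamp comparison. The class, deadline values, and shared flag are invented for illustration and are not part of the verification approach itself.

```java
// Minimal example of the three temporal constructs listed in the abstract, i.e. the
// kind of Java source such an analysis would abstract into a network of timed automata.
public class TemporalFeatures {

    private final Object lock = new Object();
    private volatile boolean dataReady = false;

    // (1) Pausing the execution for a limited amount of time.
    void backOff() throws InterruptedException {
        Thread.sleep(500);
    }

    // (2) Waiting for an event that has to occur before a deadline expires.
    boolean awaitData(long deadlineMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + deadlineMillis;
        synchronized (lock) {
            while (!dataReady) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) {
                    return false;          // deadline expired before the event occurred
                }
                lock.wait(remaining);
            }
        }
        return true;
    }

    // (3) Comparing timestamps.
    boolean isExpired(long createdAtMillis, long ttlMillis) {
        return System.currentTimeMillis() - createdAtMillis > ttlMillis;
    }

    void publishData() {
        synchronized (lock) {
            dataReady = true;
            lock.notifyAll();
        }
    }
}
```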

    Robot Teardown, Stripping Industrial Robots for Good

    Building a robot requires a careful selection of components that interact across networks while meeting timing deadlines. Given this complexity, as robots get damaged or their security is compromised, their components will increasingly require updates and replacements. Contrary to expectations, and similar to Ford with cars in the 1920s, most robot manufacturers oppose this. They employ planned obsolescence practices, organizing dealers and system integrators into "private networks" and providing repair parts only to "certified" companies in order to discourage repairs and evade competition. In this article, we introduce and advocate for robot teardown as an approach to study robot hardware architectures and fuel security research. We show how teardown can help in understanding the underlying hardware and demonstrate how our approach can help researchers uncover security vulnerabilities. Our case studies show how robot teardown becomes an essential practice for security in robotics, helping us identify and report a total of 100 security flaws with 17 new CVE IDs over a period of two years. Lastly, we demonstrate how, through teardown, planned obsolescence hardware limitations can be identified and bypassed to obtain full control of the hardware, which poses a threat to the robot manufacturers' business model as well as a security threat.